As of 2016-02-26, there will be no more posts for this blog. s/blog/pba/

Sometimes, I like tweeting in code instead of plain words. So I tweeted this:

exec('class Me:\n def drink_beer(self): self.has_headache = True') ; me = Me() ; me.drink_beer() ; print me.has_headache #BEER

exec is a statement, not a function[1].

I had tried to use eval() first, but it only accepts an expression, so exec seems to be the right one to use because it accepts statements; actually, you can feed it an entire source file. If we just give it our code, the code is executed in the current scope.
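A quick illustration of the difference (using modern Python 3, where exec is a builtin function rather than a statement):

```python
# eval() only accepts expressions; exec also accepts statements.
print(eval('1 + 1'))  # 2

try:
    eval('x = 1')  # assignment is a statement, so eval() rejects it
except SyntaxError:
    print('eval rejects statements')

namespace = {}
exec('x = 1', namespace)  # exec runs statements happily
print(namespace['x'])  # 1
```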

I did this because I needed one-liner code; you don't have \n or <br/> on Twitter[3]. I just wrote it for fun. If you want to use eval() or exec in a real program, you must use them with care.

[1] This works because a tuple with only one element is not a tuple but the element itself, unless you append a , after that only element[2]. So it is evaluated like exec only_element.
[2] A single-item tuple must have a trailing comma, such as (d,), from the documentation.
[3] Actually Twitter does keep \n, but you have to look at the HTML source code, the Atom feed, or the JSON via the API. In the rendered HTML, \n doesn't give you a line break.

No, this is not about using position: fixed by default. Take a look at this page as an example, scroll down, and look at the top-left. Notice how the issue metadata sticks on top? As far as I know there is no pure-CSS way to do this; it uses onscroll to do the job.

Here is the source code and you can play with jsFiddle, try the auto-scroll button.

I didn't invent this; the original code is from an answer[1] on Stack Overflow, which in turn seems to be from Stack Overflow's own code, but with a few of my own modifications. The code requires jQuery.

The #sticky-anchor is the most important part, it does two things:

I have seen some visualizations of software repositories. I saw Gource via a post on FriendFeed; it's really simple to compile and easy to use. You only need to give it the root directory of a repo if it's a Mercurial or Git repository.

The following are some of mine.

1   Commands

Just for the record:

gource -640x360 --start-position 0.3 --stop-position 0.7 -s 1 --file-filter "/((Bash|ChromeExtension|GoogleAppEngine|GoogleGadget|JavaScript|Miscellaneous|Python|lastweet|twitter-python-client|twitter-tracker)/.*|Blogger/\w+\.\w+)" --output-ppm-stream - --output-framerate 30 ~/p/yjl | ffmpeg -y -b 3000K -r 30 -f image2pipe -vcodec ppm -i - brps1.mp4
gource -640x360 --stop-at-end -s 1 --output-ppm-stream - --output-framerate 30 ~/p/brps | ffmpeg -y -b 3000K -r 30 -f image2pipe -vcodec ppm -i - brps2.mp4
gource -640x360 --stop-at-end -s 1 --output-ppm-stream - --output-framerate 30 ~/p/lso | ffmpeg -y -b 3000K -r 30 -f image2pipe -vcodec ppm -i - lso.mp4
gource -640x360 --stop-at-end -s 1 --file-filter "/Blogger/brps/.*" --output-ppm-stream - --output-framerate 30 ~/p/yjl | ffmpeg -y -b 3000K -r 30 -f image2pipe -vcodec ppm -i - yjl.mp4
gource -640x360 --stop-at-end -s 1 --output-ppm-stream - --output-framerate 30 ~/p/twimonial | ffmpeg -y -b 3000K -r 30 -f image2pipe -vcodec ppm -i - twimonial.mp4
gource -640x360 --stop-at-end -s 1 --output-ppm-stream - --output-framerate 30 ~/p/lilbtn | ffmpeg -y -b 3000K -r 30 -f image2pipe -vcodec ppm -i - lilbtn.mp4

A few hours ago, I wondered if there is a Python library that would let me use jQuery's selectors to do some tasks on HTML files. PyQuery came up in the search results.

A quick example explains all:

from pyquery import PyQuery as pq

d = pq(url='http://makeyjl.blogspot.com')
for ele_post in d('div.post'):
  post = pq(ele_post)
  print post('h3.post-title a').text()
  print post('div.post-body').text()
  print

It prints out all post titles and contents on this blog's first page. It's quick and simple.

A FriendFeeder posted a question about what font and size you use to code. It is an interesting question, and I believe every programmer has tried adjusting their programming editor/IDE at some point.

My editor is Vim and I use it in urxvt, so here is my setting:

Rxvt.font: xft:Envy Code B 10pt:style=Regular:size=10:antialias=false

The font is Envy Code B (you might also want to check out the R version); I found it is the best for me after trying one after another. Before Envy Code, I had been using Terminus for a long time; it's a very popular terminal font. I had also tried Proggy Crisp and Square for a while. There are other popular programming fonts, but few really look good at 10 points. I also disable anti-aliasing because it's no good for programming, or even in a terminal. Anti-aliasing and text files will never be good friends.

Here is a screenshot of how this font looks in Vim:

http://farm5.static.flickr.com/4046/4254562507_38b5a5300b_o.png

I also need to mention the colors: I use the color scheme named koehler, and the background color of my urxvt is actually not pure black but #242424. I feel it's better than pure black; softer, I think.

So, whats your font?

Just finished it. A random series of binary digits as the background, and the post content area is wider. I also moved the entire CSS out to external Google Sites. I like it better!

I also added a new poll (my first time using one) to the sidebar.

The following screenshot is the old look; I got it from some website directory (I forgot to take a screenshot before I switched to the new one :( I still have the old template layout, but I am extremely lazy):




The following screenshot is just for the record:

See the image for yourself:


I have Opera 10 and Firefox 3.5 to show you why I don't like the select box in Chrome (4.0.275.0, Chromium) on Linux. You can see the dropdown box is shorter than the text input and the button. It creates a strange visual distraction, quite annoying. The other two browsers render the UI elements at the same height; I don't know what's wrong with Chrome.

The HTML code is:
<!doctype html>
<html>
<head>
<title>Hello HTML</title>
</head>
<body>
<p>Hello World!</p>
<input type="text" value="Text Input"/>
<input type="button" value="Click on me!"/>
<select>
<option>This is always shorter and I hate that!</option>
<option>Option2</option>
</select>
</body>
</html>

I don't have any other WebKit-based browser installed, therefore I really don't know if this is Chrome's problem or WebKit's. I hope it will have the same visual appearance in future releases. I couldn't find an issue about it on the tracker, but I am also too lazy to create one. So give me a link and I will star it.
Updated on 2010-01-17 with Chromium 4.0.295.0:

It's still the same story...

chromium-4.0.295.0

I tried to fix the Content-Type of a feed in my app. I read the Response Class documentation, and it had this line of code (I didn't try Expires):

self.response.headers.add_header('Expires', expires_str)

Needless to say I adopted it and made it to be:

self.response.headers.add_header('Content-Type', 'application/rss+xml; charset=utf-8')

It works on Development Server but not on Production Server, so I also tried after I read wsgiref.headers:

self.response.headers.add_header('content-type', 'application/rss+xml', charset='utf-8')

Still no luck.

I searched the discussion group, and found everyone coded it this way (which I actually didn't like):

self.response.headers['Content-Type'] = 'application/rss+xml; charset=utf-8'

And it works. I have no idea what's going on on the production server, and I didn't bother to look into it.

I wrote a Python script to get a word cloud of my tweets:

https://farm5.staticflickr.com/4032/4207786266_9e85802af2_o.png

I use Twitter Backup to get my tweets into a text file, then run my script to generate a list which you can use on Wordle.net's Advanced page.

If you have a common.txt, it helps to remove some common words from the result, but you can also remove them in Wordle.
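The core of such a script can be sketched like this (a minimal sketch; the sample text and the common-word set are made up for illustration):

```python
import re
from collections import Counter

def word_counts(text, common=()):
    # Lowercase, split into words, and drop the common words
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in common)

# Wordle.net's Advanced page accepts "word:count" lines
counts = word_counts('I like beer and beer likes me', common={'i', 'and', 'me'})
for word, count in counts.most_common():
    print('%s:%d' % (word, count))
```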

PS. Twitter only allows you to access your last 3,200 tweets, so go back them up as soon as possible! (I lost almost 400 tweets; I hope I can get them back someday.)

Warning

The projects are dead and some links have been removed from this post. (2015-12-02T00:35:06Z)

Twimonial and Let Secrets Out (LSO) are my 7th and 6th GAE apps; I created both in a month. I decided to post about them because I couldn't get anyone to use them.

You can read about why I created LSO in its blog, the code is licensed under the modified BSD. Here is a screenshot of it:

https://lh6.googleusercontent.com/-omMX0E2efTM/Sya5PyOkUvI/AAAAAAAACYI/ib3Dnms1hbU/s800/gae-gallery-screenshot.png

As the title describes, it's a place that lets you post your secrets, anonymously. I believe I created it for the good of the world, but I just couldn't get it to the people who need it.

Twimonial is a web app that lets you read or add testimonials about other Twitter users. I got the idea when I saw someone tweet a Follow Friday recommendation. It's just a list of usernames; I wondered, would anyone really follow someone by just seeing that? I really doubted it; at least, I would not. And a screenshot of it:

https://lh5.googleusercontent.com/-2wwZgfYquEc/Sya5WVbpRZI/AAAAAAAACYM/CoBu7vOD0dE/s800/screenshot.png

So I thought, what if I could read more about those Twitter users? That is a testimonial, and here comes Twimonial. The code is not released; it's not that I didn't want to, it's that I wondered why I should waste my time again, since not many people are interested in my stuff. The only code I got someone (probably only one person) to use is BRPS, which might be the only thing I could say I really made. I think I should also mention a failure of mine, I Thank.

If you are interested in participating or giving feedback, feel free to contact me or leave a comment. If you would use any of them, I just want to say you are the best!

PS. I also submitted a link to reddit for Twimonial.

Note

Someone asked if I could make Twimonial support Identi.ca, and I did, but it is a separate app called Dentimonial. Go check it out if you are an Identi.ca user. (2009-12-16)

https://lh3.googleusercontent.com/-aBJbS_QjOhw/SyghgeYpCNI/AAAAAAAACYQ/V4A4lSYCZHU/s800/screenshot.png

This is another not-so-useful script of mine. It is a Bash script that gathers search result counts via the Google/Yahoo/Bing APIs to make a historical chart of a specified keyword, using the Annotated Timeline (with no annotations :-)) of the Google Visualization API.

I made this script, search-result-count.sh, because I wanted a historical chart of the keyword livibetter. Yep, I search for my nickname regularly, I admit it! I like watching the number of results climb, which makes me feel better. :-D

I wasn't planning on using Yahoo and Bing because their APIs require AppIDs, and the IDs will be exposed to the public since this is written in Bash. I don't like it, but I couldn't resist seeing the result counts from them.

Because it is still new, I don't have much data to show you. The following chart covers about two weeks of collection. (Bing results were not included.)



The following is the screenshot of rendered HTML page:

I googled myself and made a chart. on Twitpic

Please be aware of a few things if you want to use this script:
  • You can use cron to run it regularly, several times a day. Don't worry, it will only update the data file once a day.
  • It will only update when all three counts from the three search engines are available. If any of them fails to return a result, you may have missing data. But it should be okay; the counts do not change much from day to day.
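The once-a-day guard can be sketched like this (a sketch in Python rather than Bash; the data file format, one tab-separated row per day starting with an ISO date, is my assumption):

```python
import datetime
import os

def updated_today(datafile):
    """Return True if the data file already has a row for today."""
    if not os.path.exists(datafile):
        return False
    with open(datafile) as f:
        lines = f.read().splitlines()
    if not lines:
        return False
    # First column of the last row is assumed to be a YYYY-MM-DD date
    last_date = lines[-1].split('\t')[0]
    return last_date == datetime.date.today().isoformat()
```

A cron job can then call the script several times a day and simply skip fetching when updated_today() is true.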

Have you ever tried to write a simple JavaScript code, like a bookmarklet? Or tried to do something with local files from Firefox?

Normally, the safety policy forbids such behaviors, but if you have a local web server you can get around it. (No! Changing the policy is the most dangerous and stupidest way to get around it, and it is the best way to put yourself in danger.)

So, I remembered Python has such a module. I was thinking of writing a quick one, but you can just do:
cd /path/to/root_files_of_your_web_server/
python -m SimpleHTTPServer

It does the job; you can access it via http://localhost:8000/. You can change the port by appending a new port number to the command.

Is this safe? Only if you know your firewall settings. Make sure the port you choose is random enough and/or not open to the public; this quick SimpleHTTPServer listens on all addresses. Also, create a special directory just for this server, containing only the files you need. Don't run it in your home directory or at the root unless you are 100% sure of what you are doing.
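If you only need it from your own machine, you can bind it to the loopback address instead (a sketch; on modern Python 3 the module is http.server rather than SimpleHTTPServer):

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Bind to 127.0.0.1 so the server is unreachable from other machines;
# port 0 asks the OS to pick a free port for us.
server = HTTPServer(('127.0.0.1', 0), SimpleHTTPRequestHandler)
print('Serving on http://%s:%d/' % server.server_address)
# server.serve_forever()  # uncomment to actually serve
server.server_close()
```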

This article, Updating Your Model's Schema, is already great and clear, but it does not have a complete code example. I decided to make one and write down some explanations. Just in case I might need it later.

There are four steps to remove a property from a data model:
  1. Inherit from db.Expando if the model does not already inherit from it.
  2. Remove the obsolete property from the model definition.
  3. Delete the attribute (the property) of each entity: del entity.obsolete
  4. Inherit from db.Model again if the model originally inherited from it.

How to actually do it:

Assume a model looks like:
class MyModel(db.Model):
  foo = db.TextProperty()
  obsolete = db.TextProperty()

Re-define the model to:
class MyModel(db.Expando):
#class MyModel(db.Model):
  foo = db.TextProperty()
  # obsolete = db.TextProperty()

Make sure the model inherits from db.Expando and comment out (or just delete) the line with the obsolete property.

Here is the example code to delete the attribute, the property:

from google.appengine.runtime import DeadlineExceededError

def del_obsolete(self):

  count = 0
  last_key = ''
  try:
    q = MyModel.all()
    cont = self.request.get('continue')
    if cont:
      q.filter('__key__ >=', db.Key(cont))
    q.order('__key__')
    entities = q.fetch(100)
    while entities:
      for entity in entities:
        last_key = str(entity.key())
        try:
          del entity.obsolete
        except AttributeError:
          pass
        entity.put()
        count += 1
      q.filter('__key__ >', entities[-1].key())
      entities = q.fetch(100)
  except DeadlineExceededError:
    self.response.out.write('%d processed, please continue to %s?continue=%s' % (count, self.request.path_url, last_key))
    return
  self.response.out.write('%d processed, all done.' % count)

Note that this snippet is to be used as a webapp.RequestHandler's get method, so it has self.response.

It uses the entities' keys to walk through every entity, which is efficient and safe. But you may also want to put your application under maintenance, to prevent other code from adding new entities. Even though key values only seem to increase for new entities, you really don't need to waste CPU time on them, since new entities have no obsolete property.

Because it has to go through all entities, it takes a lot of time to process, so a mechanism to continue the process on the rest of the entities is necessary. The code catches google.appengine.runtime.DeadlineExceededError if it cannot finish in one request; it then returns a link which allows you to continue by following it. If you have lots of entities, you may want to use the Task Queue instead of manual continuation. You may also want to cap the number of entities processed in one request, e.g. 1000.

Once it has done its job, change the model definition back to db.Model and remove the obsolete property line:
class MyModel(db.Model):
  foo = db.TextProperty()


That's it.

I needed to count how many entities of kind Blog have the boolean property accepted set to True, but I suddenly realized that OFFSET in a query is of no use to me (in fact, it is not really useful at all).

In SDK 1.1.0, OFFSET does what you would expect on the development server if you are new to GAE and have SQL experience, but it behaves differently on the production server.

Basically, if you have 1002 entities in Blog and you want to get the 1002nd entity, the following will not get you that entity:
q = Blog.all()
# Doing filter here
# Order here
# Then fetch
r = q.fetch(1, 0)[0] # 1st
r = q.fetch(1, 1)[0] # 2nd
r = q.fetch(1, 999)[0] # 1000th
r = q.fetch(1, 1000)[0] # 1001st
r = q.fetch(1, 1001)[0] # 1002nd

You will get an exception on the last one, like:
BadRequestError: Offset may not be above 1000.
BadRequestError: Too big query offset.
The first is on the production server, the second on the development server.

The OFFSET takes effect after:
  1. filtering data (WHERE clause)
  2. sorting data (ORDER clause)
  3. truncating to the first 1001 entities (even though count() only returns 1000 at most)
After filtering, sorting, and truncating to the first 1001 entities, then you can have your OFFSET. If you have read Updating Your Model's Schema, it warns you:
A word of caution: when writing a query that retrieves entities in batches, avoid OFFSET (which doesn't work for large sets of data) and instead limit the amount of data returned by using a WHERE condition.
The only way is to filter data (WHERE clause); you will need a unique property if you need to walk through all entities.

An amazing thing is that you don't need to create a new property; there is already one in all of your kinds: __key__ in a query, the key.

The benefits of using it:
  • No additional property,
  • No additional index (because it's already created by default), and
  • Combining the two above, you don't need to use additional datastore quota. Indexes and properties use quota.
Here is a code snippet that I use to count Blog entities; you should be able to adapt it if you need to process data:
def get_count(q):
  r = q.fetch(1000)
  count = 0
  while True:
    count += len(r)
    if len(r) < 1000:
      break
    q.filter('__key__ >', r[-1])
    r = q.fetch(1000)
  return count

q = db.Query(blog.Blog, keys_only=True)
q.order('__key__')
total_count = get_count(q)

q = db.Query(blog.Blog, keys_only=True)
q.filter('accepted =', True)
q.order('__key__')
accepted_count = get_count(q)

q = db.Query(blog.Blog, keys_only=True)
q.filter('accepted =', False)
q.order('__key__')
blocked_count = get_count(q)

Note that:
  • Remove keys_only=True if you need to process data, and you will then need to use r[-1].key() to filter.
  • Add resuming functionality, because this really uses a lot of CPU time when it works on a large set of data.

I just downloaded the data from one of my App Engine applications by following Uploading and Downloading; I used the new and experimental bulkloader.py to download data into a sqlite3 database. You don't need to create the Loader/Exporter classes with this new method.

It does explain how to download and upload but, as written, uploading is only covered for the production server. You have to look into the command-line options; it's not complicated.

Here is a complete example to dump data:
$ python googleappengine/python/bulkloader.py --dump --kind=Kind --url=http://app-id.appspot.com/remote_api --filename=app-id-Kind.db /path/to/app.yaml/
[INFO ] Logging to bulkloader-log-20091111.001712
[INFO ] Throttling transfers:
[INFO ] Bandwidth: 250000 bytes/second
[INFO ] HTTP connections: 8/second
[INFO ] Entities inserted/fetched/modified: 20/second
[INFO ] Opening database: bulkloader-progress-20091111.001712.sql3
[INFO ] Opening database: bulkloader-results-20091111.001712.sql3
[INFO ] Connecting to brps.appspot.com/remote_api
Please enter login credentials for app-id.appspot.com
Email: [email protected]
Password for [email protected]:
.[INFO ] Kind: No descending index on __key__, performing serial download
.......................................................................................................................................................................................
.................................
[INFO ] Have 2160 entities, 0 previously transferred
[INFO ] 2160 entities (0 bytes) transferred in 134.6 seconds

And the following is for uploading to the development server, using the sqlite3 database which we just downloaded (not the CSV):
$ python googleappengine/python/bulkloader.py --restore --kind=Kind --url=http://localhost:8080/remote_api --filename=app-id-Kind.db --app_id=app-id
[INFO ] Logging to bulkloader-log-20091111.004013
[INFO ] Throttling transfers:
[INFO ] Bandwidth: 250000 bytes/second
[INFO ] HTTP connections: 8/second
[INFO ] Entities inserted/fetched/modified: 20/second
[INFO ] Opening database: bulkloader-progress-20091111.004013.sql3
Please enter login credentials for localhost
Email: [email protected] <- This does not matter, type anything
Password for [email protected]: <- Does not matter
[INFO ] Connecting to localhost:8080/remote_api
[INFO ] Starting import; maximum 10 entities per post
........................................................................................................................................................................................................................
[INFO ] 2160 entites total, 0 previously transferred
[INFO ] 2160 entities (0 bytes) transferred in 31.3 seconds
[INFO ] All entities successfully transferred

You will need to specify the app id, which must match the one the development server is running.

This may not be needed once bulkloader.py is stable.

Someone asked why help(import) does not work. I know the reason, but that's not what I wanted to write about here. One reply revealed that I didn't know much about help: it shows a usage that I had never known before:
help('import')

You can pass a string. I had thought help just printed out __doc__, and yes, a string also has __doc__, but why would you do that? Why would you want the __doc__ of an instance of int, str, list, etc.? So I had never tried passing a string to help.
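For example (in modern Python 3, where the helper class lives in _sitebuiltins rather than site, though the behavior is the same):

```python
import io
from contextlib import redirect_stdout

# help(import) is a SyntaxError, but the string form works:
buf = io.StringIO()
with redirect_stdout(buf):
    help('import')
print('import' in buf.getvalue())  # True: the keyword help was printed

# Typing just `help` in the shell invokes __repr__, which prints the hint
print(repr(help))
```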

Therefore I didn't know I could even get help about keywords. Moreover, I thought help was a function, which it is not, as I found after digging in: help is an instance of site._Helper. The site module is loaded automatically when you fire up the Python interactive shell; once it loads, help in the shell is an instance of site._Helper.

If you invoke help without any arguments, help(), it brings you into interactive help. I had never tried using help without passing an object before.

This actually invokes site._Helper.__call__, an instance method, meaning the instance of site._Helper is callable, and that's how you get into interactive help.

site._Helper also overrides the __repr__ method. If you just type help and hit Enter, the interactive shell will actually invoke this __repr__ method, and that's how we get this hint:
Type help() for interactive help, or help(object) for help about object.

Note this does not directly mention that you can use help('string'), where the string could be a module name, a keyword, or a topic. But you can learn it from the message shown after you quit interactive help:
>>> help()

Welcome to Python 2.6! This is the online help utility.

If this is your first time using Python, you should definitely check out
the tutorial on the Internet at http://docs.python.org/tutorial/.

Enter the name of any module, keyword, or topic to get help on writing
Python programs and using Python modules. To quit this help utility and
return to the interpreter, just type "quit".

To get a list of available modules, keywords, or topics, type "modules",
"keywords", or "topics". Each module also comes with a one-line summary
of what it does; to list the modules whose summaries contain a given word
such as "spam", type "modules spam".

help> quit

You are now leaving help and returning to the Python interpreter.
If you want to ask for help on a particular object directly from the
interpreter, you can type "help(object)". Executing "help('string')"
has the same effect as typing a particular string at the help> prompt.

Maybe this is my excuse to get to know help better.

pxss.py is a replacement for PyXSS/src/__init__.py, but not the entire PyXSS. You get IdleTracker, XSSTracker, and get_info(), and that's all.

It accesses libXss.so via ctypes. You only need to put it alongside your script; no installation or compilation required.

A quick example of getting the idle time:

import pxss
print pxss.get_info().idle, 'ms'

The get_info() returns the same data as in PyXSS.

If you have another display, you should be able to pass it (after you open it) with other necessary variables to get_info():

def get_info(p_display=None, default_root_window=None, p_info=None):

and get the XScreenSaverInfo.

I made this for another helper script of mine; its quality is very poor. If you are interested in ctypes, this script might give you some ideas, but this is only my second time using ctypes. My first time was on Windows, for accessing GDI+.
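For a feel of the mechanism without needing libXss, here is a minimal ctypes illustration using the C math library instead (libm being available under this name is an assumption about the platform; the pattern of declaring argument and return types before calling is the same):

```python
import ctypes
import ctypes.util

# Load the shared library, then describe sqrt()'s C signature so that
# ctypes converts the argument and the return value correctly.
libm = ctypes.CDLL(ctypes.util.find_library('m') or 'libm.so.6')
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]
print(libm.sqrt(9.0))  # 3.0
```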

I have been writing a program to show quotes from the Yahoo Finance service. After a few searches, I learned Matplotlib has matplotlib.widgets.Cursor for the task; here is the example code. But it's not the kind of cursor we want: the cursor in such a program must snap its horizontal line to the price in the figure.

So this snap version example could fit the need. It draws the cursor manually, and works fine if your figure only has one or two data lines. But if you use something like matplotlib.finance's candlestick, which plots many things on your figure, you will see your cursor's movement lag.

The first example has a way to deal with that; it's called blitting, and you can read more about it at this page. Basically, you save the rendered image, and every time you need to draw your cursor, you restore that saved image and then draw the cursor on top. That saves a lot of time. There is another example code for blitting.

I wrote my own example as follows:

#!/usr/bin/env python


import datetime
import sys

import matplotlib
matplotlib.use('GTKAgg')
from matplotlib.figure import Figure
from matplotlib.backends.backend_gtkagg import FigureCanvasGTKAgg as FigureCanvas
import matplotlib.finance as finance
import matplotlib.mlab as mlab

from numpy import searchsorted, array

try:
  import pygtk
  pygtk.require("2.0")
except:
  print "You need to install pyGTK or GTKv2 or set your PYTHONPATH correctly"
  sys.exit(1)

import gtk


# Modified from http://matplotlib.sourceforge.net/examples/pylab_examples/cursor_demo.html
class SnaptoCursor:

    def __init__(self, ax, x, y, useblit=True):

        self.ax = ax
        self.lx = None
        self.ly = None
        self.x = x
        self.y = y
        self.bg = None
        self.useblit = useblit

    def mouse_move(self, event):

        if not event.inaxes:
          return

        ax = event.inaxes
        minx, maxx = ax.get_xlim()
        miny, maxy = ax.get_ylim()

        if self.useblit and self.bg is None:
          self.bg = ax.figure.canvas.copy_from_bbox(ax.bbox)
        ax.figure.canvas.restore_region(self.bg)

        x, y = event.xdata, event.ydata

        indx = searchsorted(self.x, [x])[0]
        if indx == len(self.x):
          indx = len(self.x) - 1
        x = self.x[indx]
        y = self.y[indx]
        # update the line positions
        if self.lx is not None:
          self.lx.set_data((minx, maxx), (y, y))
          self.ly.set_data((x, x), (miny, maxy))
        else:
          color = 'b-' if self.useblit else 'r-'
          self.lx, = ax.plot((minx, maxx), (y, y), color)  # the horiz line
          self.ly, = ax.plot((x, x), (miny, maxy), color)  # the vert line

        if self.useblit:
          ax.draw_artist(self.lx)
          ax.draw_artist(self.ly)
          ax.figure.canvas.blit(ax.bbox)
        else:
          ax.figure.canvas.draw()


def create_figure(quotes):

  f = Figure(figsize=(5,4), dpi=100)

  a = f.add_subplot(111)
  canvas = FigureCanvas(f)  # a gtk.DrawingArea
  canvas.set_size_request(800,300)

  a.xaxis_date()

  finance.candlestick(a, quotes, width=0.5)

  return f


def main():

  win = gtk.Window()
  win.connect('destroy', gtk.main_quit)
  win.set_title('Cursors')

  vbox = gtk.VBox()
  win.add(vbox)

  # Get data from Yahoo Finance
  enddate = datetime.date.today()
  startdate = enddate + datetime.timedelta(days=-72)
  quotes = finance.quotes_historical_yahoo('GOOG', startdate, enddate)

  qa = array(quotes)

  f = create_figure(quotes)
  a = f.gca()
  vbox.pack_start(gtk.Label('No Blit'), False, False)
  vbox.pack_start(f.canvas)

  cursor1 = SnaptoCursor(a, qa[:,0], qa[:,2], useblit=False)
  f.canvas.mpl_connect('motion_notify_event', cursor1.mouse_move)

  f = create_figure(quotes)
  a = f.gca()
  vbox.pack_start(gtk.Label('Use Blit'), False, False)
  vbox.pack_start(f.canvas)

  cursor2 = SnaptoCursor(a, qa[:,0], qa[:,2], useblit=True)
  f.canvas.mpl_connect('motion_notify_event', cursor2.mouse_move)

  win.show_all()
  gtk.main()


if __name__ == '__main__':

  main()

A quick screenshot[1][2]:

[1]http://xs1144.xs.to/xs1144/09434/matplotlib-cursor371.png is gone.
[2]http://xs1144.xs.to/xs1144/09434/matplotlib-cursor661.png is gone.

I wrote this to get familiar with GTK, Glade, and matplotlib. This post is not a walkthrough or tutorial for any of them; I just wanted to write down some notes. The Flu Data Viewer is an example, a piece of code I could copy from for other projects (so this code is in the public domain), and it is unlikely to be updated in the future. The flu data is from Google Flu Trends.

This program draws one or more countries' flu data in one figure. It downloads data from Google and saves it to fludata.csv in the current directory. I was thinking of doing some processing, but I didn't know what I could do, and I wasn't really interested in this.

Here is a screenshot:

https://lh5.googleusercontent.com/-1oNMHueeAkc/Stru_HVWiyI/AAAAAAAACSM/wWLlmj1qvHc/s640/2009-10-18-174725_905x630_scrot.png

You can download the code at Google Code.

1   Window.visible

When I first used Glade to create the UI, I didn't know that window.visible is False by default, so you have to either set it in Glade or call window.show().

2   gtk.glade.XML(gladefile, windowname)

The windowname must match window.name in Glade. It's obvious, but I thought it was one window per Glade XML file, so I set it to something else, and my window never showed up. One Glade XML file can contain many windows.

3   missing int type data in csv

3.1   Loading using matplotlib.mlab.csv2rec

d = dict(zip(fields, [lambda value: int(value) if value != '' else nan]*len(fields)))
self.rec = mlab.csv2rec(CSV_FILENAME, converterd=d)

Where fields is a list of field names. If a value is missing, it will be an empty string ''; I think using NaN (Not a Number, numpy.nan) may be a better way to represent it than 0 (zero).

There is another way to deal with missing data, by giving missingd to csv2rec, but the version of matplotlib on my computer doesn't have it. It should work like converterd, but with the values you want to assign when data is missing.

3.2   Feeding to gtk.TreeView

If you set the int type on a gtk.ListStore, it will not accept numpy.nan, so the str type may be okay to use instead.

4   Multiselection mode in gtk.TreeView

treeview.get_selection().set_mode(gtk.SELECTION_MULTIPLE)

5   Don't forget NavigationToolbar

Users still need a way to zoom in/out.

from matplotlib.backends.backend_gtkagg import NavigationToolbar2GTKAgg as NavigationToolbar

vbox.pack_start(canvas, True, True)
vbox.pack_start(NavigationToolbar(canvas, window), False, False)

I hope I can find a way to do data line tracing. It's hard to tell what Y is when X=123.

I just tried to add two entity counts to my app's statistics page. Then I found out that the statistics API (released on 2009-10-13 with version 1.2.6) is not available on the development server.

You can run the following code without errors:
from google.appengine.ext.db import stats
global_stat = stats.GlobalStat.all().get()

But global_stat is always None.

So I ended up with a code as follows:
db_blog_count = memcache.get('db_blog_count')
if db_blog_count is None:
  blog_stat = stats.KindStat.all().filter('kind_name =', 'Blog').get()
  if blog_stat is None:
    db_blog_count = 'Unavailable'
  else:
    db_blog_count = blog_stat.count
  memcache.set('db_blog_count', db_blog_count, 3600)

The documentation didn't explicitly mention whether the statistics are available on the development server or not (maybe I didn't read carefully), and neither did the release notes.

PS. I know the code is awful: str/int types mixed, terrible. But I am too lazy to add an if clause in the template file to check whether db_blog_count is None, something like -1, or anything else representing unavailable data.

PS2. The code should be just if blog_stat: (fourth line), with the next two statements swapped, if you know what I mean.